-
Abstract
Background: Dissecting the neurobiology of dance would shed light on a complex, yet ubiquitous, form of human communication. In this experiment, we sought to study, via mobile electroencephalography (EEG), the brain activity of five experienced dancers while dancing butoh, a postmodern dance that originated in Japan.
Results: We report the experimental design, methods, and practical execution of a highly interdisciplinary project that required the collaboration of dancers, engineers, neuroscientists, musicians, and multimedia artists, among others. We explain in detail how we technically validated all our EEG procedures (e.g., via impedance value monitoring) and minimized potential artifacts in our recordings (e.g., via electrooculography and inertial measurement units). We also describe the engineering details and hardware that enabled us to synchronize signals recorded at different sampling frequencies, along with a signal preprocessing and denoising pipeline that we used for data re-sampling and power line noise removal. As our experiment culminated in a live performance, where we generated a real-time visualization of the dancers’ interbrain synchrony on a screen via an artistic brain-computer interface, we outline all the methodology (e.g., filtering, time windows, equations) we used for online bispectrum estimation. Additionally, we provide access to all the raw EEG data and code we used in our recordings. Lastly, we discuss how we envision that the data could be used to address several hypotheses, such as that of interbrain synchrony or the motor theory of vocal learning.
Conclusions: Being, to our knowledge, the first study to report synchronous and simultaneous recordings from five dancers, we expect that our findings will inform future art-science collaborations, as well as dance-movement therapies.
Free, publicly-accessible full text available December 1, 2025
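The preprocessing steps mentioned in this abstract (re-sampling streams recorded at different rates to a shared frequency and removing power line noise) can be illustrated with a minimal Python sketch. This is an assumption-laden illustration built on SciPy, not the authors' released pipeline; the 250 Hz target rate, the 60 Hz mains frequency, and the notch quality factor are placeholder choices.

```python
# Minimal sketch of re-sampling plus power-line denoising, assuming SciPy.
# Target rate, mains frequency, and notch Q are illustrative placeholders,
# not the study's actual settings.
from math import gcd

import numpy as np
from scipy.signal import filtfilt, iirnotch, resample_poly


def preprocess_eeg(eeg, fs_in, fs_out=250, mains_hz=60.0, q=30.0):
    """Resample each channel to a shared rate, then notch out mains noise.

    eeg    : array of shape (n_channels, n_samples)
    fs_in  : original sampling frequency in Hz (integer)
    fs_out : common sampling frequency shared across recording devices
    """
    # Rational resampling so streams recorded at different rates line up.
    g = gcd(int(fs_out), int(fs_in))
    resampled = resample_poly(eeg, int(fs_out) // g, int(fs_in) // g, axis=1)

    # Zero-phase IIR notch filter at the power-line frequency.
    b, a = iirnotch(w0=mains_hz, Q=q, fs=fs_out)
    return filtfilt(b, a, resampled, axis=1)


# Example: bring a 500 Hz stream down to 250 Hz and suppress 60 Hz noise.
rng = np.random.default_rng(0)
raw = rng.standard_normal((8, 5000))   # 8 channels, 10 s recorded at 500 Hz
clean = preprocess_eeg(raw, fs_in=500)
```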
-
Abstract
Background: This research focused on the development of a motor imagery (MI) based brain–machine interface (BMI) that uses deep learning algorithms to control a lower-limb robotic exoskeleton. The study aimed to overcome the limitations of traditional BMI approaches by leveraging the advantages of deep learning, such as automated feature extraction and transfer learning. The experimental protocol used to evaluate the BMI was designed as asynchronous, allowing subjects to perform mental tasks at will.
Methods: A total of five healthy, able-bodied subjects were enrolled in this study and participated in a series of experimental sessions. The brain signals from two of these sessions were used to develop a generic deep learning model through transfer learning. Subsequently, this model was fine-tuned during the remaining sessions and evaluated. Three distinct deep learning approaches were compared: one that did not undergo fine-tuning, one that fine-tuned all layers of the model, and one that fine-tuned only the last three layers. During the evaluation phase, participants controlled the exoskeleton in closed loop exclusively through their neural activity, using the second deep learning approach for decoding.
Results: The three deep learning approaches were compared against an approach based on spatial features that was trained for each subject and experimental session, and they demonstrated superior performance. Interestingly, the deep learning approach without fine-tuning achieved performance comparable to the features-based approach, indicating that a generic model trained on data from different individuals and previous sessions can yield similar efficacy. Among the three deep learning approaches, fine-tuning all layer weights achieved the highest performance.
Conclusion: This research represents an initial stride toward future calibration-free methods. Despite the effort to reduce calibration time by leveraging data from other subjects, complete elimination proved unattainable. The study's findings are notable for advancing calibration-free approaches, offering the promise of minimizing the need for training trials. Furthermore, the experimental evaluation protocol employed in this study aimed to replicate real-life scenarios, granting participants a higher degree of autonomy in decisions such as walking or stopping gait.
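The three fine-tuning strategies compared in this abstract (no fine-tuning, fine-tuning all layers, and fine-tuning only the last three layers) amount to different parameter-freezing choices. The sketch below shows one way to express that in PyTorch; the GenericEEGNet architecture, layer sizes, and learning rate are hypothetical stand-ins and do not correspond to the authors' actual model or training configuration.

```python
# Hedged sketch of the three fine-tuning strategies as parameter freezing in
# PyTorch. The architecture and hyperparameters below are illustrative
# placeholders, not the study's model.
import torch
import torch.nn as nn


class GenericEEGNet(nn.Module):
    """Placeholder convolutional decoder for motor-imagery EEG windows."""

    def __init__(self, n_channels=32, n_samples=500, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(1, 64), padding=(0, 32)),   # temporal filters
            nn.BatchNorm2d(8),
            nn.ELU(),
            nn.Conv2d(8, 16, kernel_size=(n_channels, 1)),           # spatial filters
            nn.BatchNorm2d(16),
            nn.ELU(),
            nn.AdaptiveAvgPool2d((1, 16)),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 16, 32),
            nn.ELU(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):
        # x has shape (batch, 1, n_channels, n_samples)
        return self.classifier(self.features(x))


def configure_fine_tuning(model, strategy):
    """Freeze or unfreeze parameters according to one of the three strategies."""
    if strategy == "none":          # use the generic model without adaptation
        for p in model.parameters():
            p.requires_grad = False
    elif strategy == "all":         # fine-tune every layer
        for p in model.parameters():
            p.requires_grad = True
    elif strategy == "last3":       # fine-tune only the last three weight layers
        for p in model.parameters():
            p.requires_grad = False
        last_layers = [m for m in model.modules()
                       if isinstance(m, (nn.Conv2d, nn.Linear))][-3:]
        for layer in last_layers:
            for p in layer.parameters():
                p.requires_grad = True
    return [p for p in model.parameters() if p.requires_grad]


model = GenericEEGNet()
trainable = configure_fine_tuning(model, "last3")
optimizer = torch.optim.Adam(trainable, lr=1e-3) if trainable else None
```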
-
Understanding and predicting others' actions in ecological settings is an important research goal in social neuroscience. Here, we deployed a mobile brain-body imaging (MoBI) methodology to analyze inter-brain communication between professional musicians during a live jazz performance. Specifically, bispectral analysis was conducted to assess the synchronization of scalp electroencephalographic (EEG) signals from three expert musicians during a three-part, 45-minute jazz performance, during which a new musician joined every five minutes. The bispectrum was estimated for all musician dyads, electrode combinations, and five frequency bands. The results showed a higher bispectrum in the beta and gamma frequency bands (13-50 Hz) when more musicians performed together and when they played a musical phrase synchronously. Positive changes in bispectrum amplitude were found approximately three seconds before the identified synchronized performance events, suggesting preparatory cortical activity predictive of concerted behavioral action. Moreover, more synchronized EEG activity across electrode regions was observed as more musicians performed, with inter-brain synchronization between the temporal, parietal, and occipital regions being the most frequent. Increased synchrony between the musicians' brain activity reflects shared multi-sensory processing and movement intention in a musical improvisation task.
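One common way to estimate a cross-bispectrum between two EEG channels (for instance, one channel from each musician in a dyad) is to average triple products of Fourier coefficients over short segments. The sketch below uses that textbook segment-averaging form; the segment length, Hann window, 50 Hz cap, and magnitude normalization are illustrative assumptions, not the exact estimator reported in this study.

```python
# Hedged sketch of a segment-averaged cross-bispectrum estimate between two
# channels, using one common definition:
#   B_xy(f1, f2) = E[ X(f1) * X(f2) * conj(Y(f1 + f2)) ]
# Segment length, window, and frequency cap are illustrative choices.
import numpy as np


def cross_bispectrum(x, y, fs, seg_len=256, f_max=50.0):
    """Return |B_xy(f1, f2)| averaged over non-overlapping segments."""
    n_segs = len(x) // seg_len
    freqs = np.fft.rfftfreq(seg_len, d=1.0 / fs)
    n_keep = int(np.searchsorted(freqs, f_max, side="right"))
    window = np.hanning(seg_len)
    acc = np.zeros((n_keep, n_keep), dtype=complex)

    for k in range(n_segs):
        X = np.fft.rfft(x[k * seg_len:(k + 1) * seg_len] * window)
        Y = np.fft.rfft(y[k * seg_len:(k + 1) * seg_len] * window)
        for i in range(n_keep):
            for j in range(n_keep):
                if i + j < Y.size:                     # stay below Nyquist
                    acc[i, j] += X[i] * X[j] * np.conj(Y[i + j])

    return np.abs(acc) / max(n_segs, 1), freqs[:n_keep]


# Example: two 10 s signals at 250 Hz that share a 10 Hz rhythm.
fs = 250.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 10 * t + 0.3) + 0.5 * rng.standard_normal(t.size)
bispec, f = cross_bispectrum(x, y, fs)
```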
-
Abstract
Objective: Understanding neural activity patterns in the developing brain remains one of the grand challenges in neuroscience. Developing neural networks are likely to be endowed with functionally important variability associated with environmental context, age, gender, and other variables. Therefore, we conducted experiments with typically developing children in a stimulating museum setting and tested the feasibility of using deep learning techniques to help identify patterns of brain activity associated with different conditions.
Approach: Four-channel, dry-EEG mobile brain-body imaging data from children at rest and during videogame play (VGP) were acquired at the Children’s Museum of Houston. A data-driven approach based on convolutional neural networks (CNNs) was used to describe underlying feature representations in the EEG and their ability to discern task and gender. The variability of the spectral features of EEG during the rest condition as a function of age was also analyzed.
Main results: Alpha power (7–13 Hz) was higher during rest, whereas theta power (4–7 Hz) was higher during VGP. Beta power (13–18 Hz), which was higher in females, was the most significant feature when differentiating between males and females. Using data from both temporoparietal channels to classify between the VGP and rest conditions, a leave-one-subject-out cross-validation accuracy of 67% was obtained. Age-related changes in EEG spectral content during rest were consistent with previous developmental studies conducted in laboratory settings, showing an inverse relationship between age and EEG power.
Significance: These findings are the first to acquire, quantify, and explain brain patterns observed during VGP and rest in freely behaving children in a museum setting using a deep learning framework. The study shows how deep learning can be used as a data-driven approach to identify patterns in the data and explores the issues and the potential of conducting experiments involving children in a natural and engaging environment.
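The spectral features referenced in this abstract (theta, alpha, and beta band power) can be computed from a single EEG channel with a short Welch-based sketch like the one below. The band edges follow the abstract; the window length and the synthetic test signal are illustrative assumptions rather than the study's processing settings.

```python
# Hedged sketch of band-power extraction via Welch's PSD; band edges follow
# the abstract, while the window length and test signal are placeholders.
import numpy as np
from scipy.integrate import trapezoid
from scipy.signal import welch

# Band edges in Hz, as defined in the abstract above.
BANDS = {"theta": (4.0, 7.0), "alpha": (7.0, 13.0), "beta": (13.0, 18.0)}


def band_powers(signal, fs, nperseg=256):
    """Absolute power in each band for one EEG channel."""
    freqs, psd = welch(signal, fs=fs, nperseg=nperseg)
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = trapezoid(psd[mask], freqs[mask])
    return powers


# Example: a noisy 10 Hz (alpha-band) oscillation sampled at 250 Hz.
fs = 250.0
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(2)
alpha_like = np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)
print(band_powers(alpha_like, fs))   # alpha power should dominate
```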